54 research outputs found

    A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends heavily on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies but also by quantitative methods, raising the recognition accuracy. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions. The genetic algorithm is a distinctive attribute of the proposed model and is used to tune the membership functions and increase the accuracy.
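The abstract does not give the rule base, but the fuzzy-classification idea can be sketched as follows. This is a minimal illustration, not the authors' model: the feature names (`brow_raise`, `mouth_open`), the triangular membership shapes, and the three toy rules are all hypothetical stand-ins for the tuned functions the paper describes.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(brow_raise, mouth_open):
    """Toy Mamdani-style rules over two hypothetical feature movements
    (normalized to [0, 1]); each rule's firing strength is the min of its
    antecedent memberships, and the strongest rule wins."""
    rules = {
        "surprise":  min(tri(brow_raise, 0.5, 1.0, 1.5), tri(mouth_open, 0.5, 1.0, 1.5)),
        "happiness": min(tri(brow_raise, -0.5, 0.0, 0.5), tri(mouth_open, 0.5, 1.0, 1.5)),
        "neutral":   min(tri(brow_raise, -0.5, 0.0, 0.5), tri(mouth_open, -0.5, 0.0, 0.5)),
    }
    return max(rules, key=rules.get)
```

In the paper, the breakpoints `a`, `b`, `c` of each membership function are what the genetic algorithm would tune.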

    Facial Expression Recognition Using Uniform Local Binary Pattern with Improved Firefly Feature Selection

    Facial expressions are essential communication tools in our daily life. In this paper, the uniform local binary pattern is employed to extract features from the face. However, this feature representation is very high-dimensional. The high dimensionality would not only affect the recognition accuracy but can also impose computational constraints. Hence, to reduce the dimensionality of the feature vector, the firefly algorithm is used to select the optimal subset that leads to better classification accuracy. However, the standard firefly algorithm risks becoming trapped in local optima after a certain number of generations. This limitation has been addressed by proposing an improved version of the firefly algorithm into which the great deluge algorithm (GDA) has been integrated. The great deluge is a local search algorithm that enhances the exploitation ability of the firefly algorithm, thus preventing it from being trapped in local optima. The improved firefly algorithm has been employed in a facial expression system. Experimental results on the Japanese Female Facial Expression database show that the proposed approach yielded good classification accuracy compared to state-of-the-art methods. The best classification accuracy obtained by the proposed method is 96.7% with 1230 selected features, whereas the Gabor-SRC method achieved 97.6% with 2560 features.
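The "uniform" restriction of the local binary pattern mentioned above can be sketched in a few lines: a neighborhood is thresholded against its center pixel to give an 8-bit code, and a code is kept as its own histogram bin only if its circular bit string has at most two 0/1 transitions (the standard definition of a uniform pattern). This is an illustration of the descriptor, not the paper's implementation.

```python
def lbp_code(center, neighbors):
    """8-bit LBP code for one pixel: bit i is set where the i-th of the
    8 circular neighbors is >= the center intensity."""
    return sum(1 << i for i, n in enumerate(neighbors) if n >= center)

def is_uniform(code):
    """A pattern is 'uniform' if its circular 8-bit string has at most two
    0->1 / 1->0 transitions; only these 58 codes get individual histogram
    bins, which is what keeps the feature vector compact."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

Even so, a histogram per image block still yields a long concatenated vector, which is the dimensionality the firefly-based selection then prunes.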

    Face Verification without False Acceptance

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular approaches in face recognition and verification. Both methods are classified as appearance-based approaches and are considered to be highly correlated, a factor that would seem to make a fusion of the two unfavourable. Nevertheless, the authors demonstrate a verification performance in which the fusion of both methods produces an improved rate compared to the individual performances. Tests are carried out on the FERET (Facial Recognition Technology) database using a modified protocol. A major drawback in applying LDA is that it requires a large set of face image samples per individual to extract the intra-class variations. In real-life applications, data enrolment incurs costs such as human time and hardware setup. Tests are therefore conducted using virtual images, and their performance and behaviour are recorded as an option for obtaining multiple samples. The FERET database is chosen because it is widely used by researchers and published results are available for comparison. Performance is presented as the verification rate when the false acceptance rate is zero; in other words, no impostors are allowed. Initial results using the fusion of two verification experts show that fusing T-Zone LDA with Gabor LDA of the whole face produces the best verification rate of 98.2%, an improvement of over 2% compared with the best individual expert.
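The abstract does not specify the fusion rule, but a common score-level scheme for combining two verification experts is min-max normalization followed by a weighted sum. The sketch below assumes that scheme and an equal weight; both are illustrative choices, not details taken from the paper.

```python
def minmax_norm(scores):
    """Rescale a list of match scores to [0, 1] so the two experts'
    scores are comparable before fusion."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(scores_a, scores_b, w=0.5):
    """Weighted sum-rule fusion of two experts' normalized match scores;
    a single threshold on the fused score then accepts or rejects."""
    na, nb = minmax_norm(scores_a), minmax_norm(scores_b)
    return [w * a + (1 - w) * b for a, b in zip(na, nb)]
```

Under the paper's protocol, the operating threshold would be pushed until the false acceptance rate on impostor scores reaches zero, and the verification rate is read off at that point.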

    An image reconstruction algorithm for a dual modality tomographic system.

    This thesis describes an investigation into the use of dual modality tomography to measure component concentrations within a cross-section. The benefits and limitations of using dual modality compared with a single modality are investigated and discussed. A number of methods are available to provide imaging systems for process tomography applications, and seven imaging techniques are reviewed. Two tomography modalities were chosen for investigation (Electrical Impedance Tomography (EIT) and optical tomography), and the proposed dual modality system is presented. Image reconstruction algorithms are described for EIT (based on the modified Newton-Raphson method), for optical tomography (based on the back-projection method), and for both modalities combined into a single tomographic imaging system, enabling comparisons to be made between the individual and combined modalities. To analyse the performance of the image reconstruction algorithms used in the EIT, optical tomography and dual modality investigations, a sequence of reconstructions using a series of phantoms is performed on a simulated vessel. Results from two distinct cases are presented: a) simulation of a vertical pipe in which the cross-section is filled with liquid, or with liquid and the objects being imaged, and b) simulation of a horizontal pipe where the conveying liquid level may vary from pipe-full down to 14% liquid. A computer simulation of an EIT imaging system based on a 16-electrode sensor array is used. The quantitative images obtained from the simulated reconstructions are compared, in terms of percentage area, with the actual cross-section of the model. The results show that useful reconstructions may be obtained with widely differing liquid levels, despite the limitations in the accuracy of the reconstructions.
The test results obtained using the phantoms with optical tomography, based on two projections each of sixteen views, show that the images produced agree closely on a quantitative basis with the physical models. The accuracy of the optical reconstructions, neglecting the effects of aliasing due to only two projections, is much higher than for the EIT reconstructions. Neglecting aliasing, the measured accuracies range from 0.1% to 0.8% for the pipe filled with water. For the sewer condition, i.e. the pipe not filled with water, the major phase is measured with an accuracy of 1% to 3.4%. For the single optical modality, the minor components are measured with accuracies of 6.6% to 19%. The test results obtained using the phantoms show that the images produced by combining the EIT and optical tomography methods agree quantitatively with the physical models. The EIT eliminates most of the aliasing, and the results show that the optical part of the system then provides accuracies for the minor components in the range 1% to 5%. It is concluded that the dual modality system shows a measurable increase in accuracy compared with the single modality systems. The dual modality system should be investigated further using laboratory flow rigs in order to check accuracies and determine practical limitations. Finally, suggestions for future work on improving the accuracy, speed and resolution of the dual modality imaging system are presented.
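The back-projection method used for the optical modality, and the aliasing caused by having only two projections, can be seen in a tiny sketch: each projection is smeared back across the grid and the smears are summed. This is an illustration of the principle on a toy grid, not the thesis's reconstruction code.

```python
def back_project(row_sums, col_sums):
    """Smear two orthogonal projections (row sums and column sums) back
    across the grid and add them; the true object cell receives both
    contributions, but cells sharing its row or column also light up,
    which is the aliasing that the EIT modality helps eliminate."""
    return [[r + c for c in col_sums] for r in row_sums]
```

For a single object at grid cell (0, 1), `back_project([1, 0], [0, 1])` gives `[[1, 2], [0, 1]]`: the object cell scores highest, but two spurious cells receive half its intensity.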

    Incident and Traffic-Bottleneck Detection Algorithm in High-Resolution Remote Sensing Imagery

    One of the most important methods to solve traffic congestion is to detect the incident state of a roadway. This paper describes the development of a method for road traffic monitoring aimed at the acquisition and analysis of remote sensing imagery. We propose a strategy for road extraction, vehicle detection and incident detection from remote sensing imagery using techniques based on neural networks, the Radon transform for angle detection, and traffic-flow measurements. Traffic-bottleneck detection is a further proposed method for recognizing incidents in both offline and real-time mode. Traffic flows and incidents are extracted from aerial images of bottleneck zones. The results show that the proposed approach has a reasonable detection performance compared to other methods. The best performance of the learning system was a detection rate of 87% with a false alarm rate of less than 18% on 45 aerial images of roadways. The traffic-bottleneck detection method had a detection rate of 87.5%.
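The Radon-transform angle detection mentioned above can be sketched discretely: foreground pixels are projected onto the normal of each candidate direction, and the direction whose projection collapses into the fewest bins (largest single peak) gives the line orientation. The 15-degree candidate grid and point-set input are simplifications for illustration, not the paper's parameters.

```python
import math
from collections import Counter

def dominant_angle(points, angles_deg=range(0, 180, 15)):
    """Discrete Radon-style scan: for each candidate angle, project the
    foreground points onto that direction's normal and histogram the
    results; a line's true orientation makes all its points fall into one
    bin, so the angle with the tallest bin wins."""
    best_angle, best_peak = None, -1
    for a in angles_deg:
        th = math.radians(a)
        bins = Counter(round(x * math.cos(th) + y * math.sin(th))
                       for x, y in points)
        peak = max(bins.values())
        if peak > best_peak:
            best_angle, best_peak = a, peak
    return best_angle
```

A vertical line of pixels projects to a single bin at 0 degrees (its normal), and a 45-degree diagonal collapses at 135 degrees, so the returned angle is the normal of the detected road direction.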

    Additional feet-on-the-street deployment method for indexed crime prevention initiative

    Under the National Key Result Area (NKRA) Safe City Program's (SCP) Safe City Monitoring System (SCMS) initiative, the Royal Malaysian Police (RMP) manages the deployment of feet-on-the-street via the indexed crime hotspots. Working with an approach known as the Repeat Location Finder (RLF), the RMP determines the displacement of indexed crime from the hotspots and may deploy feet-on-the-street at the identified displacement areas as a crime prevention measure. This paper introduces a further deployment capability by shifting the focus from the hotspots to identified serial suspects. Displacement models work on the concentration of crime incidents and the likely locations to which that concentration might shift among the surrounding immediate hotspots. This additional method, on the other hand, works on the identified suspects and identifies the next location where they might surface, which may lie beyond the distance and boundaries of the hotspots. The objective of this paper is to identify the spatial features that contribute positively to this new method. The solutions have been tested on a dataset made available by the RMP comprising 74 serial criminal suspects around the areas of Selangor, Kuala Lumpur and Putrajaya, spanning Jan 1st to Dec 31st 2013. The identification capability reaches as high as 92.86%. The RMP has been presented with the results of this paper, and it was concluded that this method may be applicable as another capability in managing the deployment of feet-on-the-street resources.

    Position and Obstacle Avoidance Algorithm in Robot Soccer

    Problem statement: Robot soccer is an attractive domain for researchers and students working in the field of autonomous robots. However, developing (coding, testing and debugging) robots for this domain is a rather complex task. Approach: This study concentrated on developing a position and obstacle avoidance algorithm for robot soccer, the part responsible for realizing soccer skills such as movement, shooting and goalkeeping. The position and obstacle avoidance formulation was based on a mathematical approach that ensures the movement of the robot is valid. The velocity of the robot was calculated to set its speed, and positioning theory, including the robot's (x, y) coordinates, was used to locate obstacles and avoid them. Results: Simulations and testing were carried out to evaluate the usefulness of the proposed algorithms. The functions for shooting, movement and obstacle avoidance were successfully implemented. Conclusion: The results show that the algorithms could be used as strategy algorithms in a real robot soccer competition.
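The abstract gives no formulas, but one standard mathematical formulation of combined goal-seeking and obstacle avoidance is a potential-field sum: an attractive velocity toward the goal plus a repulsive term when an obstacle comes within a safety radius. The sketch below assumes that formulation and hypothetical parameters (`safe_dist`, `speed`); it is not the paper's exact algorithm.

```python
import math

def steer(robot, goal, obstacle, safe_dist=1.0, speed=1.0):
    """Potential-field steering: a unit attractive pull toward the goal,
    plus a repulsive push away from the obstacle that grows as the robot
    enters safe_dist; returns the commanded velocity vector (vx, vy)."""
    ax, ay = goal[0] - robot[0], goal[1] - robot[1]
    norm = math.hypot(ax, ay) or 1.0
    vx, vy = speed * ax / norm, speed * ay / norm

    ox, oy = robot[0] - obstacle[0], robot[1] - obstacle[1]
    d = math.hypot(ox, oy)
    if 0 < d < safe_dist:
        push = (safe_dist - d) / d  # stronger the closer the obstacle
        vx += push * ox
        vy += push * oy
    return vx, vy
```

With an obstacle directly on the path, the repulsive term cancels part of the forward velocity, slowing or deflecting the robot; distant obstacles leave the goal-seeking velocity untouched.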

    A heuristic model for optimizing fuzzy knowledge base in a pattern recognition system

    No full text
    This study presents a genetic algorithm (GA) to optimize the performance of a fuzzy system for recognition of facial expressions from images. In the proposed model, a Mamdani-type fuzzy rule-based system recognizes emotions, and a GA is used to improve the accuracy and robustness of the system. To evaluate system performance, images from the FG-Net (FEED) and Cohn-Kanade databases were used to obtain the best membership function parameters. Under the training process, the proposed model not only increased the emotion recognition accuracy rate but also increased the validity of the model in adverse conditions.
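The GA-tuning step described above can be sketched generically: the membership-function breakpoints form a parameter vector, candidates are produced by Gaussian mutation, and elitism keeps the fittest vector each generation. The population size, mutation scale and loop structure below are illustrative assumptions, not the paper's settings, and the fitness function stands in for recognition accuracy on the training images.

```python
import random

def evolve(fitness, init, pop_size=20, gens=50, sigma=0.1, seed=0):
    """Minimal elitist GA: each generation, mutate every gene of the best
    membership-function parameter vector with Gaussian noise, then keep
    the candidate (including the unchanged parent) with highest fitness."""
    rng = random.Random(seed)
    best = list(init)
    for _ in range(gens):
        pop = [[g + rng.gauss(0, sigma) for g in best] for _ in range(pop_size)]
        pop.append(best)  # elitism: fitness never decreases
        best = max(pop, key=fitness)
    return best
```

In the paper's setting, `fitness` would score a parameter vector by the fuzzy system's recognition rate on the FG-Net and Cohn-Kanade training images.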

    A Novel Public Key Image Encryption Based on Elliptic Curves over Prime Group Field

    No full text
    In this paper, an encryption technique based on elliptic curves is proposed for securing images transmitted over public channels. The cryptosystem also introduces a new mapping method that converts every pixel of the plain image into a point on an elliptic curve, a mandatory prerequisite for any ECC-based encryption. The encryption and decryption processes are given in detail, together with their implementation. After applying the encryption, a security analysis is performed to evaluate the robustness of the proposed technique against statistical attacks.
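The paper's mapping method is not specified here, but the classical way to embed a message byte as a curve point is Koblitz encoding: try x = m·k + i for i = 0, 1, … until x³ + ax + b is a quadratic residue, then take the square root as y. The toy curve y² = x³ + x + 6 over p = 10007 below is a hypothetical example chosen so every 8-bit pixel fits; it is far too small for real security and is not the paper's curve.

```python
def encode_pixel(m, k=32, p=10007, a=1, b=6):
    """Koblitz-style embedding of pixel value m (0..255) on the toy curve
    y^2 = x^3 + a*x + b mod p: scan x = m*k + i until the right-hand side
    is a square.  Since p % 4 == 3, a square root (when one exists) is
    rhs^((p+1)/4) mod p.  The pixel is recovered later as x // k."""
    for i in range(k):
        x = m * k + i
        rhs = (x * x * x + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)
        if (y * y) % p == rhs:
            return x, y
    raise ValueError("no embedding found")  # probability ~2**-k per pixel
```

Each trial x is a square with probability about one half, so with k = 32 an embedding is found essentially always; after ECC decryption, the receiver discards i by integer-dividing x by k.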